46 research outputs found

    Ontological representation, integration, and analysis of LINCS cell line cells and their cellular responses

    Full text link
    Abstract: Background: Aiming to understand cellular responses to different perturbations, the NIH Common Fund Library of Integrated Network-based Cellular Signatures (LINCS) program involves many institutes and laboratories working on over a thousand cell lines. The community-based Cell Line Ontology (CLO) was selected as the default ontology for LINCS cell line representation and integration. Results: CLO has consistently represented all 1097 LINCS cell lines and included information extracted from the LINCS Data Portal and ChEMBL. Using MCF 10A cell line cells as an example, we demonstrated how to ontologically model LINCS cellular signatures such as their non-tumorigenic epithelial cell type, three-dimensional growth, latrunculin-A-induced actin depolymerization and apoptosis, and cell line transfection. A CLO subset view of LINCS cell lines, named LINCS-CLOview, was generated to support systematic LINCS cell line analysis and queries. In summary, LINCS cell lines are currently associated with 43 cell types, 131 tissues and organs, and 121 cancer types. The LINCS-CLOview information can be queried using SPARQL scripts. Conclusions: CLO was used to support ontological representation, integration, and analysis of over a thousand LINCS cell line cells and their cellular responses. https://deepblue.lib.umich.edu/bitstream/2027.42/140390/1/12859_2017_Article_1981.pd
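
    The abstract notes that the LINCS-CLOview subset can be queried with SPARQL scripts. The sketch below illustrates what such a query could look like when run locally with Python and rdflib; the file name, the "cell line cell" root label, and the query shape are illustrative assumptions rather than scripts published with the paper.

```python
# Minimal sketch: run a SPARQL query over a locally downloaded copy of the
# LINCS-CLOview OWL file. The file name and the root-term label used below
# are assumptions for illustration, not artifacts distributed with the paper.
from rdflib import Graph

g = Graph()
g.parse("lincs_cloview.owl", format="xml")  # hypothetical local copy of the CLO subset view

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cellLine ?label
WHERE {
    ?cellLine rdfs:subClassOf ?parent ;
              rdfs:label ?label .
    ?parent rdfs:label "cell line cell" .   # assumed label of the CLO root term
}
LIMIT 10
"""

for cell_line, label in g.query(query):
    print(cell_line, label)
```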

    Multi-ancestry GWAS reveals excitotoxicity associated with outcome after ischaemic stroke

    Get PDF
    During the first hours after stroke onset, neurological deficits can be highly unstable: some patients rapidly improve, while others deteriorate. This early neurological instability has a major impact on long-term outcome. Here, we aimed to determine the genetic architecture of early neurological instability, measured as the difference between the National Institutes of Health Stroke Scale (NIHSS) within 6 h of stroke onset and the NIHSS at 24 h. A total of 5876 individuals from seven countries (Spain, Finland, Poland, USA, Costa Rica, Mexico and Korea) were studied using a multi-ancestry meta-analysis. We found that 8.7% of the variance in NIHSS at 24 h was explained by common genetic variation, and that early neurological instability has a different genetic architecture from that of stroke risk. Eight loci (1p21.1, 1q42.2, 2p25.1, 2q31.2, 2q33.3, 5q33.2, 7p21.2 and 13q31.1) were genome-wide significant and explained 1.8% of the variability, suggesting that additional variants influence early change in neurological deficits. We used functional genomics and bioinformatic annotation to identify the genes driving the association from each locus. Expression quantitative trait loci mapping and summary data-based Mendelian randomization indicate that ADAM23 (log Bayes factor = 5.41) was driving the association for 2q33.3. Gene-based analyses suggested that GRIA1 (log Bayes factor = 5.19), which is predominantly expressed in the brain, is the gene driving the association for the 5q33.2 locus. These analyses also nominated GNPAT (log Bayes factor = 7.64) and ABCB5 (log Bayes factor = 5.97) for the 1p21.1 and 7p21.1 loci. Human brain single-nuclei RNA sequencing indicates that the gene expression of ADAM23 and GRIA1 is enriched in neurons. ADAM23, a presynaptic protein, and GRIA1, a protein subunit of the AMPA receptor, are part of a synaptic protein complex that modulates neuronal excitability. These data provide the first genetic evidence in humans that excitotoxicity may contribute to early neurological instability after acute ischaemic stroke. Ibanez et al. perform a multi-ancestry meta-analysis to investigate the genetic architecture of early stroke outcomes. Two of the eight genome-wide significant loci identified (ADAM23 and GRIA1) are involved in synaptic excitability, suggesting that excitotoxicity contributes to neurological instability after ischaemic stroke.

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    Get PDF
    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
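
    The central methodological point here, that more autophagosomes does not necessarily mean more autophagy, can be made concrete with a flux-style comparison. The toy calculation below is a hedged illustration loosely modeled on a marker turnover assay (levels measured with and without lysosomal inhibition); the numbers and the simple difference readout are assumptions for illustration, not a protocol taken from the guidelines.

```python
# Minimal sketch of the reasoning above: a static autophagosome count is not a
# measure of flux. The difference-based readout is loosely modeled on a marker
# turnover assay (levels with vs. without lysosomal inhibition); the numbers
# are illustrative assumptions, not values from the guidelines.

def apparent_flux(marker_with_inhibitor: float, marker_without_inhibitor: float) -> float:
    """Material that accumulates only when lysosomal degradation is blocked
    approximates how much cargo was actually being turned over."""
    return marker_with_inhibitor - marker_without_inhibitor

# Scenario A: induction and degradation both increased -> large difference, high flux.
print(apparent_flux(marker_with_inhibitor=8.0, marker_without_inhibitor=3.0))  # 5.0

# Scenario B: autophagosomes accumulate because trafficking to lysosomes is
# blocked -> static counts look similar, but little extra accumulates on
# inhibition, i.e. low flux despite "more autophagosomes".
print(apparent_flux(marker_with_inhibitor=3.5, marker_without_inhibitor=3.0))  # 0.5
```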

    Global Unique Identification of LINCS Digital Research Objects to Enable Citation, Reuse, and Persistence of LINCS Data.

    No full text
    The ability to cite LINCS datasets is critical for users and data producers alike. Requirements for dataset citation records have been set forth by the Joint Declaration of Data Citation Principles (JDDCP) and include attribution, a unique identifier, data persistence, verification, and interoperability. Data citation is also an important facilitator of the FAIR (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific datasets. DOIs are well aligned with journal citation tracking, and their cost is justified within that business model. However, LINCS datasets are complex and require granular identifiers for various LINCS Digital Research Objects (DROs), including a dataset at a specific data level, a dataset group (combining all data levels from one experiment), derived datasets (e.g. computationally reprocessed by a LINCS Center or an outside group), and also various LINCS metadata. Such identifiers are needed to describe provenance. Although DOIs have been used for datasets, we preferred an open and free solution. To accomplish this, we created collections of LINCS DROs in the MIRIAM Registry to generate unique, perennial, and location-independent identifiers. Such collections include data-level-specific dataset packages, dataset groups, small molecules, and cells. The identifiers.org service, which is built upon the information stored in MIRIAM, provides directly resolvable identifiers in the form of Uniform Resource Locators (URLs). This system provides a globally unique identification scheme to which any external resource can point, and a resolving system that gives the owner/creator of the resource collection flexibility to update the resolving URL without changing the global identifiers. These dataset and dataset group identifiers are the central component of the LINCS dataset citation record, which further includes the authors, title, year, repository, resource type, and version. These citation records have been incorporated into the LINCS Data Portal and can be downloaded in several formats, making it easy to cite a specific LINCS dataset or a dataset group. The LINCS provenance model provides a record of the creation, manipulation, and source of the dataset and metadata that are part of a LINCS dataset package. It will provide mappings of LINCS dataset packages to corresponding records in public repositories and at the data generation centers. Persistent global identifiers of LINCS DROs, formal dataset provenance, and mappings of key LINCS metadata to external qualified references (such as ontologies) are also required for the persistence of LINCS beyond the funded project and independent of the current LINCS Centers.
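
    The identifiers described here resolve through identifiers.org, which redirects a compact prefix:accession identifier to the current location of the record. The sketch below shows that resolution pattern in Python; the "lincs.data" prefix and the accession are illustrative assumptions, so substitute a real registered LINCS collection prefix and accession before use.

```python
# Minimal sketch: resolve a compact identifier through identifiers.org, the
# resolver built on the MIRIAM Registry described above. The prefix and the
# accession used below are hypothetical placeholders.
import requests

def resolve(prefix: str, accession: str) -> str:
    """Build the location-independent identifiers.org URL and follow its
    redirect to the current location of the record."""
    compact_url = f"https://identifiers.org/{prefix}:{accession}"
    response = requests.get(compact_url, allow_redirects=True, timeout=30)
    return response.url  # owners can change this target without changing the identifier

# "lincs.data" and "LDS-1110" stand in for a registered LINCS collection
# prefix and a dataset accession.
print(resolve("lincs.data", "LDS-1110"))
```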

    Formalization, annotation and analysis of diverse drug and probe screening assay datasets using the BioAssay Ontology (BAO).

    Get PDF
    Huge amounts of high-throughput screening (HTS) data for probe and drug development projects are being generated in the pharmaceutical industry and, more recently, in the public sector. The resulting experimental datasets are increasingly being disseminated via publicly accessible repositories. However, existing repositories lack sufficient metadata to describe the experiments and are often difficult to navigate by non-experts. The lack of standardized descriptions and semantics of biological assays and screening results hinders targeted data retrieval, integration, aggregation, and analyses across different HTS datasets, for example to infer mechanisms of action of small molecule perturbagens. To address these limitations, we created the BioAssay Ontology (BAO). BAO has been developed with a focus on data integration and analysis, enabling the classification of assays and screening results by concepts that relate to format, assay design, technology, target, and endpoint. Previously, we reported on the higher-level design of BAO and on the semantic querying capabilities offered by the ontology-indexed triple store of HTS data. Here, we report on our detailed design, annotation pipeline, substantially enlarged annotation knowledgebase, and analysis results. We used BAO to annotate assays from the largest public HTS data repository, PubChem, and demonstrate its utility to categorize and analyze diverse HTS results from numerous experiments. BAO is publicly available from the NCBO BioPortal at http://bioportal.bioontology.org/ontologies/1533. BAO provides controlled terminology and uniform scope to report probe and drug discovery screening assays and results. BAO leverages description logic to formalize the domain knowledge and facilitate semantic integration with diverse other resources. As a consequence, BAO offers the potential to infer new knowledge from a corpus of assay results, for example molecular mechanisms of action of perturbagens.
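
    BAO annotates assays with concepts for format, design, technology, target, and endpoint so that results can be queried semantically. The rdflib sketch below shows what such an annotation might look like as RDF triples; the namespace, the property names, the class IRI, and the PubChem AID are illustrative assumptions, not terms verified against the released ontology.

```python
# Minimal sketch: BAO-style assay annotation expressed as RDF triples, in the
# spirit of the ontology-indexed triple store described above. The namespace,
# the property names (hasAssayFormat, hasDetectionTechnology, hasEndpoint),
# the class IRI BAO_0000015, and the PubChem AID are illustrative assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

BAO = Namespace("http://www.bioassayontology.org/bao#")   # assumed BAO namespace
AID = Namespace("http://example.org/pubchem/aid/")        # placeholder for PubChem assay IDs

g = Graph()
assay = AID["1234"]                                       # hypothetical PubChem AID

g.add((assay, RDF.type, BAO["BAO_0000015"]))                            # assumed "bioassay" class
g.add((assay, BAO["hasAssayFormat"], Literal("cell-based")))            # assay format
g.add((assay, BAO["hasDetectionTechnology"], Literal("luminescence")))  # technology
g.add((assay, BAO["hasEndpoint"], Literal("IC50")))                     # endpoint

# Once many assays are annotated this way and loaded into a triple store,
# SPARQL can aggregate results across assays sharing a format or endpoint.
print(g.serialize(format="turtle"))
```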

    FAIR LINCS Metadata Powered by CEDAR Cloud-Based Templates and Services

    No full text
    The Library of Integrated Network-based Cellular Signatures (LINCS) program generates a wide variety of cell-based perturbation-response signatures using diverse assay technologies. For example, LINCS includes large-scale transcriptional profiling of genetic and small molecule perturbations, and various proteomics and imaging datasets. We currently obtain metadata through an online platform, the metadata submission tool (MST), based on spreadsheet data templates. While functional, this approach makes it difficult to keep metadata FAIR, specifically findable and reusable, without enforced controlled vocabularies and built-in links to ontologies and metadata standards. To maintain FAIR-centric metadata, we have worked with the Center for Expanded Data Annotation and Retrieval (CEDAR) to develop modular metadata templates linked to ontologies and standards present in the NCBO BioPortal. We have also developed a new LINCS Dataset Submission Tool (DST), which links new LINCS datasets to the form-fillable CEDAR templates. This metadata management framework supports authoring, curation, validation, management, and sharing of LINCS metadata, while building upon the existing LINCS metadata standards and data-release workflows. Additionally, the CEDAR technology facilitates metadata validation and testing, enabling users to ensure their input metadata are LINCS compliant prior to submission for public release. CEDAR templates have been developed for reagent metadata and experimental metadata, to describe assays, and to capture global dataset attributes. By integrating the submission of all these components into one submission tool and workflow, we aim to significantly simplify and streamline the workflow of LINCS dataset submission, processing, validation, registration, and publication. As other projects apply the same approach, many more datasets will become cross-searchable and can be linked, optimizing the metadata pathway from submission to discovery.
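
    The submission workflow described here validates metadata against CEDAR templates before public release. The sketch below mimics that check with a plain JSON Schema standing in for a template; the field names, the controlled vocabulary, and the example record are illustrative assumptions, not the actual LINCS metadata standard.

```python
# Minimal sketch of template-driven metadata validation in the spirit of the
# CEDAR-based workflow described above. The schema is a simplified stand-in
# for a CEDAR template; field names and allowed values are assumptions.
from jsonschema import ValidationError, validate

reagent_template = {
    "type": "object",
    "required": ["cell_line", "small_molecule", "assay"],
    "properties": {
        "cell_line": {"type": "string"},
        "small_molecule": {"type": "string"},
        # An enum plays the role of an ontology-backed controlled vocabulary.
        "assay": {"type": "string", "enum": ["L1000", "P100", "imaging"]},
    },
}

submission = {"cell_line": "MCF 10A", "small_molecule": "latrunculin A", "assay": "imaging"}

try:
    validate(instance=submission, schema=reagent_template)
    print("Metadata conforms to the template")
except ValidationError as err:
    print("Submission rejected:", err.message)
```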